104 research outputs found

    A Study on Performance and Power Efficiency of Dense Non-Volatile Caches in Multi-Core Systems

    In this paper, we present a novel cache design based on Multi-Level Cell Spin-Transfer Torque RAM (MLC STTRAM) that can dynamically adapt set capacity and associativity to exploit the full potential of MLC STTRAM efficiently. We exploit the asymmetric nature of the MLC storage scheme to build cache lines with heterogeneous performance characteristics: half of the cache lines are read-friendly, while the other half is write-friendly. Furthermore, we propose opportunistically deactivating ways in underutilized sets to convert MLC to Single-Level Cell (SLC) mode, which offers better overall performance and lifetime. Our ultimate goal is to build a cache architecture that combines the capacity advantages of MLC with the performance and energy advantages of SLC. Our experiments show improvements of 43% in total conflict misses, 27% in memory access latency, 12% in system performance, and 26% in LLC access energy, with a slight degradation in cache lifetime (about 7%) compared to an SLC cache.
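
    The adaptive MLC/SLC reconfiguration described above can be pictured as a per-set mode switch driven by utilization. The sketch below is only an illustration of that idea under assumed way counts, thresholds, and epoch-based bookkeeping, none of which come from the paper:

        # Hypothetical sketch of a per-set MLC/SLC mode policy; the thresholds,
        # way counts, and epoch mechanism are assumptions, not the paper's design.
        class CacheSet:
            def __init__(self, mlc_ways=8):
                self.mlc_ways = mlc_ways      # ways available when in MLC mode
                self.mode = "MLC"             # current operating mode
                self.live_lines = 0           # distinct lines touched this epoch
                self.conflict_misses = 0      # misses caused by limited ways

            @property
            def ways(self):
                # Deactivating ways halves the capacity but lets the remaining
                # cells run as SLC (faster accesses, longer lifetime).
                return self.mlc_ways if self.mode == "MLC" else self.mlc_ways // 2

            def end_of_epoch(self, low_util=0.4, high_pressure=4):
                """Re-evaluate the set's mode once per epoch."""
                utilization = self.live_lines / self.mlc_ways
                if self.mode == "MLC" and utilization < low_util:
                    self.mode = "SLC"    # underutilized: trade capacity for speed
                elif self.mode == "SLC" and self.conflict_misses > high_pressure:
                    self.mode = "MLC"    # associativity pressure is back: restore capacity
                self.live_lines = 0
                self.conflict_misses = 0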

    A Realistic Mobility Model for Wireless Networks of Scale-Free Node Connectivity

    Recent studies have discovered that many social, natural, and biological networks are characterised by a scale-free, power-law connectivity distribution. We envision wireless networks deployed directly over such real-world networks to facilitate communication among participating entities. This paper proposes the Clustered Mobility Model (CMM), in which nodes do not move randomly but are instead attracted to more populated areas. Unlike most prior mobility models, CMM is shown to exhibit a scale-free connectivity distribution. An extensive simulation study has been conducted to highlight the differences between Random WayPoint (RWP) and CMM by measuring network capacity at the physical, link, and network layers.
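
    A minimal way to see how attraction to populated areas differs from Random WayPoint is to weight waypoint choices by cluster population. The sketch below is an assumed, simplified illustration of that preferential choice, not the actual CMM definition:

        # Hypothetical sketch: pick the next waypoint cluster with probability
        # proportional to its current population, unlike RWP's uniform choice.
        import random

        def next_waypoint(clusters):
            """clusters: dict mapping cluster id -> current node count."""
            total = sum(clusters.values())
            r = random.uniform(0, total)
            acc = 0.0
            for cluster_id, population in clusters.items():
                acc += population
                if r <= acc:
                    return cluster_id
            return cluster_id  # guard against floating-point rounding

        # A node is twice as likely to head for "B" (20 nodes) as for "A" (10 nodes).
        print(next_waypoint({"A": 10, "B": 20, "C": 5}))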

    Cache Invalidation Strategies for Internet-based Vehicular Ad Hoc Networks

    The Internet-based vehicular ad hoc network (Ivanet) is an emerging technique that combines the wired Internet and a vehicular ad hoc network (Vanet) to develop a ubiquitous communication infrastructure and improve universal information and service accessibility. A key design optimization technique in Ivanets is to cache frequently accessed data items in the local storage of vehicles. Since vehicles are not critically limited by storage/memory space or power consumption, selecting the proper data items for caching is not very critical. Rather, an important design issue is how to keep cached copies valid when the original data items are updated; this is essential to provide fast-moving vehicles with fast access to valid data. In this paper, we propose a cooperative cache invalidation (CCI) scheme and its enhancement (ECCI) that take advantage of the underlying location management scheme to reduce the number of broadcast operations and the corresponding query delay. We develop an analytical model for the CCI and ECCI techniques to quickly estimate performance trends and critical design parameters. Then, we modify two prior cache invalidation techniques to work in Ivanets: a poll-each-read (PER) scheme and an extended asynchronous (EAS) scheme. We compare the performance of the four cache invalidation schemes as a function of query interval, cache update interval, and data size through extensive simulation. Our simulation results indicate that the proposed schemes can reduce query delay by up to 69%, increase the cache hit rate by up to 57%, and incur the lowest communication overhead compared to the prior PER and EAS schemes.
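
    As a rough illustration of what a cache invalidation check involves, the sketch below uses a version stamp and a nearby peer as a cheap validity oracle before falling back to the server. The callbacks and TTL are assumptions made for illustration; this is not the CCI/ECCI protocol itself:

        # Hypothetical sketch of a version-stamp validity check with a peer
        # consulted before the wired server; not the paper's CCI/ECCI scheme.
        import time

        class CachedItem:
            def __init__(self, value, version):
                self.value = value            # cached data item
                self.version = version        # version assigned by the origin server
                self.fetched_at = time.time()

        def read(item_id, cache, peer_version, server_fetch, ttl=30.0):
            """Return a valid value for item_id, preferring the local cache."""
            item = cache.get(item_id)
            if item is not None and time.time() - item.fetched_at < ttl:
                # Ask a nearby vehicle for the latest version it has heard of,
                # avoiding a broadcast query to the server on every read.
                if peer_version(item_id) == item.version:
                    return item.value
            # Missing or possibly stale: fetch from the origin and re-cache.
            value, version = server_fetch(item_id)
            cache[item_id] = CachedItem(value, version)
            return value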

    RandomCast: An Energy-Efficient Communication Scheme for Mobile Ad Hoc Networks

    In mobile ad hoc networks (MANETs), every node overhears every data transmission occurring in its vicinity and thus consumes energy unnecessarily. The IEEE 802.11 Power Saving Mechanism (PSM) reduces this waste by allowing idle nodes to sleep; however, since some MANET routing protocols, such as Dynamic Source Routing (DSR), collect route information via overhearing, they would suffer if used in combination with 802.11 PSM. Allowing no overhearing may critically deteriorate the performance of the underlying routing protocol, while unconditional overhearing may offset the advantage of using PSM. This paper proposes a new communication mechanism, called RandomCast, via which a sender can specify the desired level of overhearing, striking a prudent balance between energy and routing performance. In addition, it reduces redundant rebroadcasts of broadcast packets and thus saves more energy. Extensive simulation using NS-2 shows that RandomCast is highly energy-efficient compared to conventional 802.11 as well as 802.11 PSM-based schemes, in terms of total energy consumption, energy goodput, and energy balance.
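
    The core mechanism, letting the sender advertise how much overhearing it wants, can be illustrated with a probabilistic wake-up decision. The level names and probabilities below are assumptions for illustration, not RandomCast's actual parameters:

        # Hypothetical sketch: a neighbour decides whether to overhear (or
        # rebroadcast) based on the overhearing level advertised by the sender.
        import random

        OVERHEAR_PROB = {
            "none": 0.0,      # never overhear: maximum energy saving
            "random": 0.5,    # overhear sometimes: balance energy and route info
            "always": 1.0,    # unconditional overhearing, as in plain 802.11
        }

        def should_overhear(level):
            """Decide whether to stay awake for a packet addressed to another node."""
            return random.random() < OVERHEAR_PROB[level]

        def should_rebroadcast(level):
            """Apply the same randomisation to broadcast packets to cut
            redundant rebroadcasts in dense neighbourhoods."""
            return random.random() < OVERHEAR_PROB[level]

        # At the "random" level, a neighbour keeps its radio on only about half
        # the time, trading some overheard route information for energy.
        print(should_overhear("random"))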

    LAPSES: A Recipe for High-Performance Adaptive Router Design

    Earlier research has shown that adaptive routing can help improve network performance. However, it has not received adequate attention in commercial routers, mainly due to the additional hardware complexity and the perceived cost and performance degradation that may result from it. These concerns can be mitigated if one can design a cost-effective router that supports adaptive routing. This paper proposes a three-step recipe, called the LAPSES approach (Look-Ahead routing, intelligent Path Selection, and an Economical Storage implementation), for cost-effective, high-performance pipelined adaptive router design. The first step, look-ahead routing, eliminates a pipeline stage in the router by performing table lookup and arbitration concurrently. Next, three new traffic-sensitive path selection heuristics (LRU, LFU, and MAX-CREDIT) are proposed to select one of the available alternate paths. Finally, two techniques for reducing the routing table size of the adaptive router are presented: meta-table routing and economical storage. The proposed economical storage needs a routing table with only 9 and 27 entries for two- and three-dimensional meshes, respectively. All these design ideas are evaluated on a (16 × 16) mesh network via simulation. A fully adaptive routing algorithm and various traffic patterns are used to examine the performance benefits. Performance results show that the look-ahead design as well as the path selection heuristics boost network performance, while the economical storage approach turns out to be an ideal choice compared to the full-table and meta-table options. We believe the router resulting from these three design enhancements can make adaptive routing a viable choice for interconnects.
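
    The economical storage figures (9 entries for a 2-D mesh, 27 for 3-D) follow from indexing the table by the sign of the per-dimension offset to the destination, since 3^2 = 9 and 3^3 = 27. The sketch below illustrates that indexing only; the example entries and the heuristic mentioned at the end are assumptions, not the paper's actual table contents:

        # Hypothetical sketch: collapse every destination into one of 3^k
        # routing-table entries for a k-dimensional mesh by keeping only the
        # sign of each coordinate offset.
        def sign(v):
            return (v > 0) - (v < 0)   # -1, 0, or +1

        def table_index(src, dst):
            """Map (source, destination) to a (sign_x, sign_y, ...) table entry."""
            return tuple(sign(d - s) for s, d in zip(src, dst))

        # 2-D examples on a 16 x 16 mesh: 9 possible entries in total.
        print(table_index((3, 7), (12, 7)))   # (1, 0)  -> route only in +X
        print(table_index((3, 7), (1, 10)))   # (-1, 1) -> adaptively pick -X or +Y
        # A traffic-sensitive heuristic such as LFU would then choose, among the
        # output ports the entry permits, the one used least frequently so far.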